Use StratifiedStandardize for per-task Y standardization in TL (#5194)

Open
hvarfner wants to merge 2 commits into facebook:main from hvarfner:export-D102197139

Conversation

@hvarfner hvarfner commented Apr 29, 2026

Summary:

Adds per-task outcome standardization to the transfer learning adapter, ensuring each task's observations are standardized independently rather than jointly. Updates the default transform pipeline to use TL-specific outcome transforms.

This removes ambiguity about whether the right transforms have been applied in transfer-learning settings (e.g. QuickBO/warm-starting), where standardization should be performed within experiments rather than across them.

Differential Revision: D102197139
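Conceptually, stratified standardization normalizes each task's outcomes using that task's own mean and standard deviation, instead of pooled statistics across all tasks. A minimal NumPy sketch of these semantics (this illustrates the idea only, not the BoTorch `StratifiedStandardize` API; `standardize_per_task` is a hypothetical helper):

```python
import numpy as np

def standardize_per_task(y: np.ndarray, task: np.ndarray) -> np.ndarray:
    """Standardize outcomes within each task (stratum) independently.

    Hypothetical helper illustrating per-task standardization; the actual
    adapter delegates to BoTorch's StratifiedStandardize outcome transform.
    """
    z = np.empty_like(y, dtype=float)
    for t in np.unique(task):
        mask = task == t
        mu = y[mask].mean()
        sigma = y[mask].std()
        if sigma == 0:  # guard against zero-variance strata
            sigma = 1.0
        z[mask] = (y[mask] - mu) / sigma
    return z

# Two tasks on very different outcome scales: joint standardization would let
# the large-scale task dominate the statistics; per-task standardization
# brings each task to zero mean and unit variance on its own.
y = np.array([1.0, 2.0, 3.0, 100.0, 200.0, 300.0])
task = np.array([0, 0, 0, 1, 1, 1])
z = standardize_per_task(y, task)
```

With joint standardization the small-scale task's observations would all collapse near the pooled mean; per-task standardization preserves each experiment's internal ordering and scale.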

@meta-cla meta-cla Bot added the CLA Signed Do not delete this pull request or issue due to inactivity. label Apr 29, 2026

meta-codesync Bot commented Apr 29, 2026

@hvarfner has exported this pull request. If you are a Meta employee, you can view the originating Diff in D102197139.

hvarfner pushed a commit to hvarfner/Ax that referenced this pull request Apr 29, 2026
@hvarfner hvarfner force-pushed the export-D102197139 branch from 8046f90 to 407478c Compare April 29, 2026 14:56
@meta-codesync meta-codesync Bot changed the title Use StratifiedStandardize for per-task Y standardization in TL Use StratifiedStandardize for per-task Y standardization in TL (#5194) Apr 29, 2026
hvarfner pushed a commit to hvarfner/Ax that referenced this pull request Apr 29, 2026

codecov-commenter commented Apr 29, 2026

Codecov Report

❌ Patch coverage is 98.38710% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 96.61%. Comparing base (14d3475) to head (927965b).

Files with missing lines Patch % Lines
ax/adapter/transfer_learning/adapter.py 92.85% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5194      +/-   ##
==========================================
+ Coverage   96.38%   96.61%   +0.23%     
==========================================
  Files         617      617              
  Lines       69605    69638      +33     
==========================================
+ Hits        67090    67284     +194     
+ Misses       2515     2354     -161     


@hvarfner hvarfner force-pushed the export-D102197139 branch from 407478c to e407627 Compare May 12, 2026 20:30
hvarfner pushed a commit to hvarfner/Ax that referenced this pull request May 12, 2026
Carl Hvarfner added 2 commits May 14, 2026 11:36
…acebook#5200)

Summary:
**Motivation:** model_space/search_space was not used properly: parameter bounds on the search space were set from the union of source and target, and a parameter fixed on the target would remain a RangeParameter whenever the model_space contained a Fixed/Range change for that parameter.

Adds a `data_parameters` argument to `TorchAdapter._get_fit_args` that decouples SSD construction (model params) from data column extraction (target params). This lets the TL adapter set `_model_space` to include source-only RangeParameters directly, so the SSD naturally covers the full joint feature space -- eliminating the need for the `_expand_ssd_to_joint_space` post-hoc expansion.

Differential Revision: D104702983
…ook#5194)

Summary:

Adds per-task outcome standardization to the transfer learning adapter, ensuring each task's observations are standardized independently rather than jointly. Updates the default transform pipeline to use TL-specific outcome transforms.

This removes ambiguity on whether the right transforms have been applied (e.g. QuickBO/warm-starting), where standardization is not performed across, but within experiments.

Differential Revision: D102197139
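The decoupling described in the D104702983 summary can be sketched schematically: one parameter collection defines the bounds the model sees (the joint source/target space), while a separate collection selects which columns are extracted from the target data. All names below (`FitArgs`, `build_fit_args`) are hypothetical and for illustration only; the actual change adds a `data_parameters` argument to `TorchAdapter._get_fit_args`.

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class FitArgs:
    # Bounds over the model's (joint) feature space, e.g. for SSD construction.
    bounds: dict[str, tuple[float, float]]
    # Column names to extract from observation data.
    data_columns: list[str]

def build_fit_args(
    model_parameters: dict[str, tuple[float, float]],
    data_parameters: list[str] | None = None,
) -> FitArgs:
    """Hypothetical sketch: bounds come from model_parameters (which may
    include source-only range parameters), while data_parameters controls
    which columns are read from the data. When data_parameters is None,
    the two coincide, recovering the original coupled behavior."""
    if data_parameters is None:
        data_parameters = list(model_parameters)
    return FitArgs(bounds=dict(model_parameters), data_columns=data_parameters)

# Source experiments varied "lr" and "width"; the target fixes "width".
# The model's bounds cover the joint space, but only "lr" is read as a
# data column for the target, so no post-hoc expansion of the SSD is needed.
args = build_fit_args(
    model_parameters={"lr": (1e-4, 1e-1), "width": (8.0, 512.0)},
    data_parameters=["lr"],
)
```

Under this split, the search-space digest naturally covers the full joint feature space up front, which is what makes a post-hoc expansion step such as `_expand_ssd_to_joint_space` unnecessary.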
@hvarfner hvarfner force-pushed the export-D102197139 branch from e407627 to 927965b Compare May 14, 2026 18:36

Labels

CLA Signed Do not delete this pull request or issue due to inactivity. fb-exported meta-exported
